Generalized Hebbian Algorithm

The Generalized Hebbian Algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning, with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except that it can be applied to networks with multiple outputs. The name originates from the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience: that changes are proportional to the correlation between the firing of the pre- and post-synaptic neurons.

==Theory==

The GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form

:<math>\Delta w_{ij} = \eta \left( y_i x_j - y_i \sum_{k=1}^{i} w_{kj} y_k \right),</math>

where <math>w_{ij}</math> defines the synaptic weight or connection strength between the <math>j</math>th input and <math>i</math>th output neurons, <math>x</math> and <math>y</math> are the input and output vectors, respectively, and <math>\eta</math> is the ''learning rate'' parameter.
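A minimal NumPy sketch of this update rule may make it concrete; the function name gha_update, the learning rate, and the toy covariance in the usage example are illustrative choices, not from the source.

<syntaxhighlight lang="python">
import numpy as np

def gha_update(W, x, eta=0.01):
    """One step of the Generalized Hebbian Algorithm (Sanger's rule).

    W   : (m, n) weight matrix; row i holds the weights of output neuron i
    x   : (n,) input vector (assumed zero-mean)
    eta : learning rate
    """
    y = W @ x                            # outputs: y_i = sum_j w_ij x_j
    # Delta w_ij = eta * (y_i x_j - y_i * sum_{k<=i} w_kj y_k);
    # the lower-triangular mask realizes the k <= i (Gram-Schmidt-like) sum.
    lower = np.tril(np.outer(y, y))      # lower[i, k] = y_i y_k for k <= i, else 0
    return W + eta * (np.outer(y, x) - lower @ W)

# Toy usage: rows of W should converge (up to sign) to the leading
# eigenvectors of the input covariance, ordered by decreasing eigenvalue.
rng = np.random.default_rng(0)
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
W = rng.normal(scale=0.1, size=(2, 2))
for x in rng.multivariate_normal([0.0, 0.0], cov, size=5000):
    W = gha_update(W, x, eta=0.005)
print(W)
</syntaxhighlight>

The lower-triangular product is what distinguishes the GHA from plain Oja's rule applied per neuron: each output neuron i learns on the input with the contributions of neurons 1 through i-1 subtracted, which is why successive rows recover successive principal components.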